#
#
#---- example3.lf
#
#
#---- This is an lf example that learns the AND
#---- function of two binary inputs.
#
#---- The trees are saved into the file example3.tre
#---- The encodings are saved into the file example3.cod
#---- Note that a saved set of trees must be accompanied
#---- by its corresponding encodings if the trees are to function
#---- properly in future trials where the trees are loaded
#---- instead of generated.
#

#---- Specify tree statements.
tree

#---- Train on trees of 512 leaves.
size = 512

#---- Use a majority vote of 3 trees to promote accuracy.
#---- If your ALNs are not generalizing well, try increasing
#---- the vote parameter from 1 to any odd number.
#---- Depending on the noisiness of your data, you may need to
#---- set this all the way up to 31, but 7 usually works well.
#---- In this example, the ALNs don't need votes to generalize the
#---- pattern well. The statement is here simply as an illustration.
vote = 3
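#---- (With vote = 3, the reported output for each vector is the value
#---- that at least 2 of the 3 trees agree on.)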

#---- Train until we get 4 elements of the training set right
min correct = 4

#---- or until 10 epochs have passed.
max epochs = 10
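#---- (An epoch is normally one complete pass through the training set.)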

#---- Output the tree for later retrieval
#---- We could also use "save folded tree to", but folded trees
#---- may not re-train very well in the future.
save tree to "example3.tre"

#---- Specify function statements.
function

#---- Domain dimension MUST be the first statement, followed
#---- by the codomain dimension statement.
domain dimension = 2

#---- There is only 1 codomain dimension.
codomain dimension = 1
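#---- The two domain dimensions are the inputs A and B; together with
#---- the single codomain dimension (A and B), each training and test
#---- vector below has 2 + 1 = 3 values.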

#---- Coding output will be saved for use with the trees we are saving.
save coding to "example3.cod"

#---- All domain and codomain dimensions are boolean, so specify
#---- bits:stepsize for the encoding of each input and the output.
coding = 1:1 1:1 1:1
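#---- (Here "1:1" means 1 bit with a step size of 1 for each variable,
#---- which is enough to separate the two boolean values 0 and 1.)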

#---- Boolean values have 2 quantization levels.
quantization = 2 2 2

#---- Optional specifications of the largest values in the 3 encodings;
#---- if not specified, then the largest value in the training and test set
#---- is used.
largest = 1 1 1

#---- Optional specifications of the smallest values in the 3 encodings;
#---- if not specified, then the smallest value in the training and test set
#---- is used.
#---- Note that the smallest values must not equal the largest values.
smallest = 0 0 0
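#---- With smallest = 0, largest = 1, and 2 quantization levels, value 0
#---- falls in level 0 and value 1 in level 1, as the expected output at
#---- the end of this file shows.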

#---- There are four rows in our training set.
training set size = 4
training set =

# A B A and B
1 1 1
1 0 0
0 1 0
0 0 0

#---- We will test on the following 4 vectors.
test set size = 4
test set =

# A B A and B
1 1 1
1 0 0
0 1 0
0 0 0

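#---- Since the AND of two binary inputs has only 4 possible input
#---- patterns, the test set here is identical to the training set.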

#---- The following output file should be generated:
#---- The first line indicates how many codomains there are.
#---- The next four lines represent the four vectors in the test set.
#---- Each value is followed by its corresponding quantization number
#---- in the prescribed encoding scheme. Each codomain value is followed
#---- by the corresponding result from the ALNs, along with its quantization
#---- number. Remember, the calculated quantization level matters more
#---- than the calculated value itself. You can get more accurate values
#---- by tightening up the encoding.
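#---- In the sample output below, each row thus holds eight numbers:
#---- A and its quantization level, B and its level, the desired A and B
#---- value and its level, and finally the ALN result and its level.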

#---- After the results comes the error histogram, which counts,
#---- for each of the codomains, the number of times the result quantization
#---- level differed from the actual quantization level by n. In this example,
#---- the ALNs handled the test set perfectly, so there are 4 counts for
#---- errors of n = 0 in the codomain.

# A B A and B A and B result

#1
#1.000000 1 1.000000 1 1.000000 1 1.000000 1
#1.000000 1 0.000000 0 0.000000 0 0.000000 0
#0.000000 0 1.000000 1 0.000000 0 0.000000 0
#0.000000 0 0.000000 0 0.000000 0 0.000000 0
#
#ERROR HISTOGRAM
#0 errors 4
#1 errors 0
#2 errors 0
#3 errors 0
#4 errors 0
#5 errors 0
#6 errors 0
#7 errors 0
#8 errors 0
#9+ errors 0
